RidgeRun GStreamer Analytics Example: Synchronization Debug

Example: Remote Debug Logging for Pipeline Synchronization Issues

Synchronization issues are common in pipelines with live sources, often manifesting as dropped buffers when processing stages can’t keep up with real-time playback. This example shows how the RidgeRun GStreamer Analytics tool can be used to capture and analyze detailed logs for diagnosing this type of problem.


Pipeline Scenario

The test pipeline simulates a live video stream with intentional processing delays. These delays cause buffers to arrive late at the sink element, triggering warnings and drops:

RR_PROC_NAME=sync-test GST_DEBUG="DEBUG" GST_REMOTE_DEBUG="DEBUG" GST_TRACERS="rrlogtracer" gst-launch-1.0 videotestsrc is-live=true pattern=ball ! identity sleep-time=25000 ! identity sleep-time=25000 ! identity sleep-time=25000 ! autovideosink
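
A quick back-of-the-envelope check explains the failure: identity's sleep-time is expressed in microseconds, so each of the three elements adds roughly 25 ms of processing delay, about 75 ms per buffer in total. Assuming videotestsrc negotiates its default framerate of 30 fps (a frame period of roughly 33 ms), the pipeline spends more than twice the frame interval on every buffer, so frames reach the sink increasingly late and are eventually dropped.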

The key environment variables in this setup are:

Table 1. Environment variables definition
Variable          Purpose
RR_PROC_NAME      Sets the name under which the process will appear in Grafana.
GST_DEBUG         Controls what is captured in the local circular log file on the device.
GST_REMOTE_DEBUG  Defines what logs are sent remotely to Grafana Drilldown for analysis.
GST_TRACERS       Enables the RidgeRun custom tracer (rrlogtracer) to capture and structure GStreamer logs.

Note:

Here, GST_REMOTE_DEBUG=DEBUG ensures that all debug data is captured and streamed to Grafana, with finer filtering applied later in the Drilldown interface.

Alternatively, targeted filters can be defined upfront, for example:

GST_REMOTE_DEBUG="GST_PERFORMANCE:5,basesink:7"

Exploring Logs in Grafana Drilldown

Once the pipeline is running, logs are available for inspection through Grafana Drilldown.

The Drilldown interface organizes logs using a rich labeling system and provides several ways to narrow the focus to the most relevant data.


System and Process Context

Start by scoping the logs to the device running the pipeline. Open Grafana and navigate to the Drilldown panel:

  1. In the list of systems, select the system where the pipeline is running.
  2. Grafana will now display logs from that specific system only.
RidgeRun GStreamer Analytics system selection

From there, the process label can be used to focus on logs from the sync-test process specifically.

Info
The collapsible Log Volume visualization gives an overview of how logs are distributed over time and across debug levels, making it easier to spot bursts of warnings or errors.
RidgeRun GStreamer Analytics logs distribution for a selected process
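
Because Drilldown builds its views on top of Loki, the same scoping can also be expressed as a LogQL query in Grafana Explore. This is only a sketch: the label keys below (system, process) are assumed to match the label names shown in Drilldown, and the values are placeholders:

{system="my-device", process="sync-test"} |= "buffer is too late"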

Pipeline and Category Filters

Logs can be refined even further using the pipeline label, which is especially useful on systems running multiple GStreamer pipelines simultaneously.

Two categories are particularly important for synchronization debugging:

  • GST_PERFORMANCE
    • Select the GST_PERFORMANCE category to focus on performance-related logs.
    • Highlights issues like dropped buffers.
    • "buffer is too late" messages show when a frame was missed, including the expected render time vs. the actual pipeline clock time.
    • These messages are key to understanding how far the pipeline has drifted from real-time playback.
RidgeRun GStreamer Analytics filtering by category label
  • basesink
    • Provides detailed sink timing data.
    • Pair this with Filter logs by string (with regex enabled) to isolate relevant details such as timestamps and jitter values, which show whether buffers arrived early or late; the same expression can also be reused offline, as shown after this list.
    • This helps correlate buffer timing data with performance warnings to pinpoint the root cause.
buffer late|buffer is too late|base_time|got times|possibly|jitter
RidgeRun GStreamer Analytics filtering by string
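
The same expression also works offline with standard tools. Assuming the debug output is additionally mirrored to a plain file, for example via GStreamer's GST_DEBUG_FILE variable (the path below is only illustrative), the matching lines can be extracted with grep:

grep -E "buffer late|buffer is too late|base_time|got times|possibly|jitter" /tmp/sync-test.log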

Focusing on Latency

To evaluate latency behavior, search logs for the keyword have_latency.

These entries reveal how latency is calculated and distributed across the pipeline, providing insight into whether the configured pipeline latency is sufficient for real-time playback.

RidgeRun GStreamer Analytics filtering by string
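
Pipeline latency can also be cross-checked locally with the latency tracer that ships with stock GStreamer, independently of the remote logging setup. This is a minimal sketch, not part of the RidgeRun tooling, and it prints latency trace records to the device's console:

GST_DEBUG="GST_TRACER:7" GST_TRACERS="latency" gst-launch-1.0 videotestsrc is-live=true pattern=ball ! identity sleep-time=25000 ! identity sleep-time=25000 ! identity sleep-time=25000 ! autovideosink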

Filtering by Level

Logs can also be filtered by debug level, making it easy to isolate warnings or errors.

Selecting only WARNING level messages provides a clean view of critical events such as buffer drops.

Info
Clicking a log entry expands its metadata, including the labels and the associated filename, system, process, pipeline, element, and pad information.
RidgeRun GStreamer Analytics filtering by level label

Visualizing Log Distribution

The Labels tab provides a graphical representation of how logs are distributed across systems, processes, categories, levels and other defined labels.

These visualizations make it easy to identify patterns, such as whether warnings spike at specific points in time or are tied to certain parts of the pipeline.

Filters applied in the Logs tab are also reflected here, ensuring consistent views across both modes. Click on any graph to inspect detailed data.

RidgeRun GStreamer Analytics - Grafana Drilldown Labels tab

Filtering Strategies

While this example uses GST_REMOTE_DEBUG=DEBUG to capture all logs and apply filtering in Grafana, in many cases it’s useful to define more focused filters when running the pipeline to reduce noise.
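
One possible focused configuration, mirroring the targeted filter shown earlier, restricts both the local circular log and the remote stream to warnings plus the two synchronization-relevant categories from the start:

RR_PROC_NAME=sync-test GST_DEBUG="2,GST_PERFORMANCE:5,basesink:7" GST_REMOTE_DEBUG="GST_PERFORMANCE:5,basesink:7" GST_TRACERS="rrlogtracer" gst-launch-1.0 videotestsrc is-live=true pattern=ball ! identity sleep-time=25000 ! identity sleep-time=25000 ! identity sleep-time=25000 ! autovideosink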

Filtering can also be adjusted dynamically at runtime through the RidgeRun custom Grafana plugin located in the provisioned dashboards:

  • Select the system and process of interest.
  • Modify the Filtered Debug Level field to change the filtering level without restarting the pipeline.
Use of RidgeRun Grafana custom plugin to change the filtered debug level

Closing Remarks

The synchronization case demonstrates how RidgeRun GStreamer Analytics transforms a complex debugging challenge into a manageable workflow. Instead of relying on raw log dumps and manual inspection, engineers can use Grafana Drilldown to filter, group, and visualize debug traces in real time. By aligning these logs with pipeline latency and jitter measurements, synchronization issues such as dropped or late buffers become immediately visible. This capability provides faster root-cause identification and more confidence when tuning live pipelines.